Rates of convergence in the source coding theorem, in empirical quantizer design, and in universal lossy source coding

Authors

  • Tamás Linder
  • Gábor Lugosi
  • Kenneth Zeger
Abstract

Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R: (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mean-square error (MSE) with respect to m training vectors, then its MSE for the true source converges in expectation and almost surely to the minimum possible MSE as O(√(log m / m)); (2) The MSE of an optimal k-dimensional vector quantizer for the true source converges, as the dimension grows, to the distortion-rate function D(R) as O(√(log k / k)); (3) There exists a fixed-rate universal lossy source coding scheme whose per-letter MSE on n real-valued source samples converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)); (4) Consider a training set of n real-valued source samples blocked into vectors of dimension k, and a k-dimensional vector quantizer designed to minimize the empirical MSE with respect to the m = ⌊n/k⌋ training vectors. Then the per-letter MSE of this quantizer for the true source converges in expectation and almost surely to the distortion-rate function D(R) as O(√(log log n / log n)), if one chooses k = ⌈(1/R)(1−ε) log n⌉ for any ε ∈ (0, 1).
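
To make the design procedure of result (4) concrete, here is a minimal sketch (not from the paper) assuming Python with NumPy and the logarithm taken base 2 with R in bits per sample: the n training samples are blocked into k-dimensional vectors with k ≈ (1/R)(1−ε) log₂ n, and a codebook of 2^(kR) codewords is fitted to the training vectors. Lloyd iterations are used as a practical stand-in for the exact empirical-MSE minimizer analyzed in the theorem, so they only reach a local minimum; all function names below are illustrative.

    # Sketch of the empirical design in result (4): block the training samples
    # into k-dimensional vectors and fit a 2^(kR)-codeword codebook by Lloyd
    # iterations (a local, not global, minimizer of the empirical MSE).
    import numpy as np

    def lloyd_design(train_vectors, num_codewords, iters=50, seed=0):
        """Fit a codebook to (m, k)-shaped training vectors by Lloyd iterations."""
        rng = np.random.default_rng(seed)
        m, k = train_vectors.shape
        codebook = train_vectors[rng.choice(m, num_codewords, replace=False)]
        for _ in range(iters):
            # Nearest-neighbor partition of the training set.
            d2 = ((train_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            labels = d2.argmin(axis=1)
            # Centroid update; empty cells keep their old codeword.
            for i in range(num_codewords):
                cell = train_vectors[labels == i]
                if len(cell):
                    codebook[i] = cell.mean(axis=0)
        return codebook

    def per_letter_mse(samples, codebook, k):
        """Per-letter MSE of nearest-neighbor quantization on blocked samples."""
        x = samples[: (len(samples) // k) * k].reshape(-1, k)
        d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.min(axis=1).mean() / k

    R, n, eps = 1.0, 20000, 0.5                            # rate (bits/sample), samples, ε
    k = max(1, int((1.0 / R) * (1 - eps) * np.log2(n)))    # dimension choice from result (4)
    rng = np.random.default_rng(1)
    train = rng.uniform(-1.0, 1.0, size=n)                 # bounded-support memoryless source
    codebook = lloyd_design(train[: (n // k) * k].reshape(-1, k), 2 ** int(round(k * R)))
    test = rng.uniform(-1.0, 1.0, size=n)                  # fresh samples from the true source
    print("per-letter MSE on the true source:", per_letter_mse(test, codebook, k))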

Related articles

Rates of Convergence in the Source Coding Theorem, in Empirical Quantizer Design, and in Universal Lossy Source Coding

Rate of convergence results are established for vector quantization. Convergence rates are given for an increasing vector dimension and/or an increasing training set size. In particular, the following results are shown for memoryless real-valued sources with bounded support at transmission rate R: (1) If a vector quantizer with fixed dimension k is designed to minimize the empirical mea...

Case Study: Empirical Quantizer Design

Now that we have safely made our way through the combinatorial forests of Vapnik–Chervonenkis classes, we will look at an interesting application of the VC theory to a problem in communications engineering: empirical design of vector quantizers. Vector quantization is a technique for lossy data compression (or source coding), so we will first review, at a very brisk pace, the basics of source c...
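
Since the excerpt breaks off before the review itself, here is a brief reminder of the setup it refers to (a sketch of my own, not text from that chapter): a k-dimensional vector quantizer with N codewords maps each block of k source samples to its nearest codeword, the decoder reproduces the block by table lookup, and the rate is (log₂ N)/k bits per source sample. A minimal sketch, assuming Python with NumPy:

    # Minimal k-dimensional vector quantizer: nearest-neighbor encoder and
    # table-lookup decoder, with rate (log2 N) / k bits per source sample.
    import numpy as np

    def vq_encode(blocks, codebook):
        """Index of the nearest codeword for each k-dimensional block."""
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d2.argmin(axis=1)

    def vq_decode(indices, codebook):
        """Reproduce each block by looking up its codeword."""
        return codebook[indices]

    k, N = 2, 4                                   # block length and codebook size
    codebook = np.array([[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5]])
    blocks = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1000, k))
    reproduction = vq_decode(vq_encode(blocks, codebook), codebook)
    rate = np.log2(N) / k                         # 1 bit per source sample here
    mse = ((blocks - reproduction) ** 2).sum(axis=1).mean() / k
    print(f"rate = {rate} bit/sample, per-letter MSE = {mse:.4f}")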

Universal multiresolution source codes

A multiresolution source code is a single code giving an embedded source description that can be read at a variety of rates and thereby yields reproductions at a variety of resolutions. The resolution of a source reproduction here refers to the accuracy with which it approximates the original source. Thus, a reproduction with low distortion is a “high-resolution” reproduction while a reproducti...
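
As a toy illustration of the embedded-description idea (my own sketch, not a construction from that paper), consider a B-bit uniform scalar quantizer on [0, 1) whose index bits are transmitted most significant first: decoding any prefix of the bitstream already gives a valid, lower-resolution reproduction, and reading further bits refines it.

    # Toy embedded description: a B-bit uniform scalar quantizer on [0, 1)
    # whose index is sent most-significant-bit first, so any prefix of the
    # bitstream decodes to a coarser but valid reproduction.
    def embedded_encode(x, bits=8):
        """Index bits of x in [0, 1), most significant bit first."""
        index = min(int(x * 2 ** bits), 2 ** bits - 1)
        return [(index >> (bits - 1 - b)) & 1 for b in range(bits)]

    def embedded_decode(prefix):
        """Midpoint reconstruction from the first len(prefix) bits."""
        level = sum(bit << (len(prefix) - 1 - i) for i, bit in enumerate(prefix))
        return (level + 0.5) / 2 ** len(prefix)

    x = 0.7231
    stream = embedded_encode(x, bits=8)
    for r in (2, 4, 8):                           # read the same stream at three rates
        xhat = embedded_decode(stream[:r])
        print(f"{r} bits: xhat = {xhat:.4f}, error = {abs(x - xhat):.4f}")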

Empirical quantizer design in the presence of source noise or channel noise

The problem of vector quantizer empirical design for noisy channels or for noisy sources is studied. It is shown that the average squared distortion of a vector quantizer designed optimally from observing clean independent and identically distributed (i.i.d.) training vectors converges in expectation, as the training set size grows, to the minimum possible mean-squared error obtainable for quan...


Journal:
  • IEEE Trans. Information Theory

Volume 40, Issue -

Pages -

Publication date: 1994